Camouflaged Object Detection (COD) aims to detect objects hidden in complex environments. Existing COD algorithms ignore the influence of feature representation and fusion strategies on detection performance when combining multi-level features. Therefore, a COD algorithm based on progressive feature enhancement and aggregation was proposed. First, multi-level features were extracted by a backbone network. Then, to improve the representational ability of the features, an enhancement network composed of Feature Enhancement Modules (FEM) was used to enhance the multi-level features. Finally, an Adjacency Aggregation Module (AAM) was designed in the aggregation network to fuse information between adjacent features and highlight the camouflaged object region, and a new Progressive Aggregation Strategy (PAS) was proposed to aggregate adjacent features progressively, achieving effective multi-level feature fusion while suppressing noise. Experimental results on 3 public datasets show that the proposed algorithm achieves the best performance on 4 objective evaluation metrics compared with 12 state-of-the-art algorithms; on the COD10K dataset in particular, its weighted F-measure and Mean Absolute Error (MAE) reach 0.809 and 0.037 respectively. The proposed algorithm thus achieves better performance on COD tasks.
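The enhance-then-progressively-aggregate flow can be sketched as follows. The internal designs of FEM, AAM, and PAS below are illustrative assumptions (the abstract does not specify them): simple normalization, nearest-neighbour upsampling, and element-wise fusion stand in for the learned modules.

```python
import numpy as np

def fem(feat):
    """Hypothetical Feature Enhancement Module: normalize, then ReLU.
    (The paper's actual FEM design is not given in the abstract.)"""
    f = (feat - feat.mean()) / (feat.std() + 1e-6)
    return np.maximum(f, 0.0)

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling so adjacent levels share a resolution."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def aam(shallow, deep):
    """Hypothetical Adjacency Aggregation Module: fuse the deeper (coarser)
    feature into its shallower neighbour by upsampling and element-wise
    combination, emphasising locations both levels agree on."""
    return shallow * upsample2x(deep) + shallow

def pas(features):
    """Progressive Aggregation Strategy sketch: enhance every level, then
    aggregate from the deepest level toward the shallowest one."""
    feats = [fem(f) for f in features]          # enhancement network
    agg = feats[-1]
    for shallow in reversed(feats[:-1]):        # progressive aggregation
        agg = aam(shallow, agg)
    return agg

# Four feature levels at resolutions 32, 16, 8, 4 (backbone output stand-ins)
levels = [np.random.rand(32 // 2**i, 32 // 2**i) for i in range(4)]
out = pas(levels)
print(out.shape)  # aggregated map at the finest resolution
```

The deepest-to-shallowest order mirrors the "progressive" idea: coarse, semantically strong features gate the finer levels, which suppresses background noise in the shallow maps.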
Forged and tampered data frames should be identified and filtered out to ensure network security and efficiency. However, existing schemes usually fail when the verification devices themselves are attacked or maliciously controlled in a Software Defined Network (SDN). To solve this problem, a blockchain-based data frame security verification mechanism was proposed. First, a Proof of Frame Forwarding (PoFF) consensus algorithm was designed and used to build a lightweight blockchain system. Then, an efficient security verification scheme for SDN data frames was proposed on the basis of this blockchain system. Finally, a flexible semi-random verification scheme was presented to balance verification efficiency against resource cost. Simulation results show that, compared with the hash-chain-based verification scheme, the proposed scheme significantly decreases the missed detection rate when an equal proportion of switches is maliciously controlled. Specifically, when 40% of the switches are controlled, the missed detection rate stays below 32% in the basic verification mode and can be further reduced to 7% with the assistance of the semi-random verification scheme, both far lower than the 72% of the hash-chain-based scheme, while the resource overhead and communication cost introduced by the mechanism remain within a reasonable range. Moreover, the proposed scheme maintains good verification performance and efficiency even when the SDN controller is completely unavailable.
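For contrast, here is a minimal sketch of the hash-chain baseline the mechanism is compared against, assuming each switch on the forwarding path extends a SHA-256 chain with its own identifier (the switch IDs and chaining order are illustrative assumptions):

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest helper."""
    return hashlib.sha256(data).digest()

def forward_with_hash_chain(frame: bytes, path):
    """Each switch on the path extends a hash chain over the frame with its
    ID, so the final tag commits to both the payload and the route taken.
    (Sketch of the baseline hash-chain verification scheme.)"""
    tag = h(frame)
    for switch_id in path:
        tag = h(tag + switch_id.encode())
    return tag

def verify(frame: bytes, path, tag) -> bool:
    """A verifier that knows the expected path recomputes the chain."""
    return forward_with_hash_chain(frame, path) == tag

path = ["s1", "s2", "s3"]
tag = forward_with_hash_chain(b"payload", path)
print(verify(b"payload", path, tag))   # a genuine frame verifies
print(verify(b"tampered", path, tag))  # a forged frame breaks the chain
```

The weakness the abstract points out is visible here: if the verifying device itself is compromised, it can simply report `True` for forged frames, which is what the blockchain-based PoFF consensus is designed to prevent.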
The formation of carotid artery plaque is closely related to complex hemodynamic factors, and accurate simulation of such complex flow conditions is of great significance for the clinical diagnosis of carotid artery plaque. To simulate the pulsating flow accurately, a Large Eddy Simulation (LES) model was combined with the Lattice Boltzmann Method (LBM) to construct an LBM-LES carotid artery simulation algorithm, and a real geometric model of carotid artery stenosis was built with medical image reconstruction software, enabling high-resolution numerical simulation of flow through the carotid stenosis. By calculating the blood flow velocity and Wall Shear Stress (WSS), meaningful flow results were obtained, demonstrating the effectiveness of LBM-LES for studying blood flow downstream of the carotid stenosis. Based on the OpenMP programming environment, parallel computation on a grid of the order of ten million cells was carried out on the fully interconnected fat nodes of a high-performance cluster. The results show that the LBM-LES carotid artery simulation algorithm has good parallel performance.
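As a small illustration of the WSS post-processing step, the sketch below estimates wall shear stress from a near-wall velocity profile. A plane Poiseuille profile with a known analytic WSS stands in for the resolved LBM-LES velocity field; the viscosity, gap, and peak-velocity values are illustrative assumptions.

```python
import numpy as np

# WSS = mu * du/dy at the wall, estimated from a resolved velocity profile.
# Stand-in profile: plane Poiseuille flow, zero velocity at both walls.
mu = 3.5e-3      # dynamic viscosity of blood [Pa*s] (typical value)
H = 6.0e-3       # channel gap standing in for vessel diameter [m]
u_max = 0.5      # peak velocity [m/s]

y = np.linspace(0.0, H, 201)
u = 4.0 * u_max * (y / H) * (1.0 - y / H)   # parabolic profile

# One-sided finite difference at the wall (y = 0), as one would apply
# to discrete lattice data near the vessel boundary
wss_numeric = mu * (u[1] - u[0]) / (y[1] - y[0])
wss_exact = 4.0 * mu * u_max / H            # analytic value for this profile
print(wss_numeric, wss_exact)
```

In the real pipeline the gradient is taken from lattice velocities along the wall normal of the reconstructed stenosis geometry rather than from an analytic profile.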
Graphlet Degree Vector (GDV) is an important method for studying biological networks, revealing the correlation between nodes and their local network structures. However, as the number of automorphism orbits to be examined grows and the biological network scale expands, the time complexity of the GDV method increases exponentially. To resolve this problem, based on the existing serial GDV method, a parallel GDV method based on the Message Passing Interface (MPI) was realized. In addition, the GDV method itself was improved and the improved method was parallelized as well: its calculation process was optimized to avoid double counting when searching for the automorphism orbits of different nodes, and tasks were allocated reasonably in combination with a load balancing strategy. Experimental results on simulated network data and real biological network data indicate that both the parallel GDV method and the improved parallel GDV method obtain good parallel performance, can be applied to networks of different types and scales, and have good scalability, effectively maintaining the efficiency of searching for automorphism orbits in the network.
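A sketch of the per-node counting the GDV method performs, restricted to graphlets of up to 3 nodes (orbit 0 = degree, 1 = end of a 2-path, 2 = centre of a 2-path, 3 = triangle member; the full method covers larger graphlets), plus a hypothetical greedy load-balancing split of nodes across workers:

```python
from itertools import combinations

def gdv3(adj):
    """Partial Graphlet Degree Vector over graphlets with up to 3 nodes.
    adj: node -> set of neighbours. Returns node -> (orbit0..orbit3)."""
    gdv = {}
    for v, nbrs in adj.items():
        deg = len(nbrs)
        # triangles at v = edges among v's neighbours
        tri = sum(1 for u, w in combinations(sorted(nbrs), 2) if w in adj[u])
        # paths v-u-w with w not adjacent to v: each triangle is removed twice
        o1 = sum(len(adj[u]) - 1 for u in nbrs) - 2 * tri
        # v as path centre: neighbour pairs that are NOT closed into a triangle
        o2 = deg * (deg - 1) // 2 - tri
        gdv[v] = (deg, o1, o2, tri)
    return gdv

def balanced_partition(adj, workers):
    """Greedy load balancing sketch: assign each node (cost ~ degree^2,
    since the neighbour-pair scan dominates) to the lightest worker."""
    loads, parts = [0] * workers, [[] for _ in range(workers)]
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        i = loads.index(min(loads))
        parts[i].append(v)
        loads[i] += len(adj[v]) ** 2
    return parts

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}  # triangle with a pendant
print(gdv3(adj))
print(balanced_partition(adj, 2))
```

In the MPI version each worker would run the same per-node loop over its partition; the partitioning heuristic here is an assumption standing in for the paper's load balancing strategy.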
The Bernoulli-Gaussian (BG) model in the Expectation-Maximization Bernoulli-Gaussian Approximate Message Passing (EM-BG-AMP) algorithm is constrained by its symmetry, which restricts how well it approximates the actual prior distribution of a signal, while the Gaussian-Mixture (GM) model in the Expectation-Maximization Gaussian-Mixture Approximate Message Passing (EM-GM-AMP) algorithm is a higher-order extension of the BG model with rather high complexity. To solve these problems, a Bernoulli-Asymmetric-Gaussian (BAG) model was proposed, and from it the Expectation-Maximization Bernoulli-Asymmetric-Gaussian Approximate Message Passing (EM-BAG-AMP) algorithm was derived. The main idea of the algorithm is to assume that the input signal obeys the BAG model, and then to use Generalized Approximate Message Passing (GAMP) to reconstruct the signal while updating the model parameters iteratively. Experimental results show that, when processing different images, the running time and Peak Signal-to-Noise Ratio (PSNR) of EM-BAG-AMP increase by 1.2% and 0.1-0.5 dB respectively compared with EM-BG-AMP; in particular, for images with simple texture and pronounced color changes, the PSNR increases by 0.4-0.5 dB. EM-BAG-AMP is thus an extension of EM-BG-AMP that better adapts to actual signals.
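The asymmetric component of a BAG-style prior can be illustrated as follows. The exact parameterisation below, a two-sided Gaussian with separate left and right widths normalised to integrate to 1, is an assumption, since the abstract does not give the paper's formula:

```python
import numpy as np

def asym_gauss_pdf(x, theta, sig_l, sig_r):
    """Two-sided (asymmetric) Gaussian density: width sig_l left of the mode
    theta and sig_r right of it, normalised so it integrates to 1.
    (Illustrative form of the asymmetric component in a BAG prior.)"""
    c = 2.0 / (np.sqrt(2.0 * np.pi) * (sig_l + sig_r))
    sig = np.where(x < theta, sig_l, sig_r)
    return c * np.exp(-((x - theta) ** 2) / (2.0 * sig ** 2))

# BAG prior = point mass (1 - lam) at zero (the inactive coefficients)
# plus lam times the asymmetric Gaussian (the active coefficients).
lam, theta, sig_l, sig_r = 0.1, 0.0, 0.5, 2.0
x = np.linspace(-30.0, 30.0, 200001)
pdf = asym_gauss_pdf(x, theta, sig_l, sig_r)
mass_active = lam * pdf.sum() * (x[1] - x[0])  # Riemann-sum integral
print(mass_active)  # continuous part carries the active mass lam
```

The asymmetry (sig_l != sig_r) is exactly what the symmetric BG model cannot express, while only one extra parameter is added, keeping the model far simpler than a full Gaussian mixture.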
Because existing passenger-finding algorithms do not consider a taxi's spatio-temporal context, a collaborative filtering recommendation algorithm for taxi passenger-finding based on spatio-temporal context was proposed. The algorithm mapped potential passenger locations onto a space network and introduced a time delay factor into the similarity measure to obtain the set of neighbors whose driving behavior is similar to the target taxi's. Based on location context, the algorithm then chose, from this neighbor set, the potential passenger location of greatest interest to the target taxi. Experimental results on Fuzhou taxi trajectory data show that the proposed algorithm obtains the best recommendation result when the time delay factor is 0.7. Meanwhile, compared with traditional collaborative filtering recommendation algorithms, it obtains better recommendation results over neighbor sets of different sizes, indicating that it is more accurate than the traditional collaborative filtering algorithms.
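One plausible way to realise a similarity measure with a time delay factor is to decay each past pickup by `alpha ** lag` before comparing two taxis' histories over the space-network cells. The formulation below is an illustration of the idea, not the paper's exact measure:

```python
import math

def decayed_vector(history, alpha):
    """Turn a pickup history {cell: [lags]} into a score per cell, where a
    pickup lag time slots ago contributes alpha**lag (older counts less)."""
    return {c: sum(alpha ** lag for lag in lags) for c, lags in history.items()}

def cosine(u, v):
    """Cosine similarity between two sparse cell-score vectors."""
    dot = sum(u.get(c, 0.0) * v.get(c, 0.0) for c in set(u) | set(v))
    nu = math.sqrt(sum(s * s for s in u.values()))
    nv = math.sqrt(sum(s * s for s in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity(hist_a, hist_b, alpha=0.7):
    """Time-decayed similarity between two taxis' pickup histories;
    alpha is the time delay factor (0.7 was the best value reported)."""
    return cosine(decayed_vector(hist_a, alpha), decayed_vector(hist_b, alpha))

a = {"c1": [0, 1], "c2": [2]}   # cell -> lags (time slots) of past pickups
b = {"c1": [0], "c3": [1]}
print(similarity(a, b))
```

Neighbors would be the taxis with the highest such similarity to the target taxi, and the recommended pickup cell would be chosen from their recent high-scoring cells.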
Existing trust models have two shortcomings in trust path search: first, the factors affecting the trust value are either not fully considered or treated as equally important; second, many algorithms ignore the importance of the number of interactions when searching the trust path. In view of these problems, a trust path search algorithm based on graph theory was proposed. The concept of probability of honesty was put forward to further weigh the credibility of a node and to serve as the priority basis of the search, making priority-first search more reasonable. Meanwhile, the algorithm searched by filtering, using the combined probability of the multiple factors that affect node credibility. Analysis shows that the complexity of the proposed algorithm is of the order of (n-m)^2, much lower than the order-n^2 complexity of the original fine-grained algorithm. Experimental results show that the proposed algorithm can better filter out malicious nodes, improve the accuracy of trust path search, and resist attacks by malicious nodes.
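The "probability of honesty as search priority" idea can be sketched as a best-first search that maximises the product of per-node honesty probabilities, implemented here as Dijkstra over `-log(p)` costs (the graph and probability values are illustrative):

```python
import heapq
import math

def most_trusted_path(graph, src, dst):
    """Return (trust, path) maximising the product of honesty probabilities
    along the path. Maximising a product of probabilities is equivalent to
    minimising the sum of -log(p), so Dijkstra applies directly.
    graph: node -> {neighbor: honesty probability of that neighbor}."""
    best = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        cost, v, path = heapq.heappop(heap)
        if v == dst:
            return math.exp(-cost), path
        if cost > best.get(v, float("inf")):
            continue                      # stale heap entry
        for u, p in graph.get(v, {}).items():
            c = cost - math.log(p)
            if c < best.get(u, float("inf")):
                best[u] = c
                heapq.heappush(heap, (c, u, path + [u]))
    return 0.0, []

graph = {
    "A": {"B": 0.9, "C": 0.5},
    "B": {"D": 0.8},
    "C": {"D": 0.9},
}
trust, path = most_trusted_path(graph, "A", "D")
print(path, trust)  # A-B-D (0.9*0.8) beats A-C-D (0.5*0.9)
```

Filtering low-probability nodes before the search, as the abstract describes, would simply drop edges whose honesty probability falls below a threshold.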
In Video Compressed Sensing based on a Linear Dynamic System (CS-LDS), the model parameters can be estimated directly from randomly sampled data; however, if all video frames are sampled in the same way, the sampled data are redundant. To solve this problem, an improved algorithm based on adaptive compressive sampling was proposed. First, a Linear Dynamic System (LDS) model of the video signal was established. Then the sampled data of the video signal were obtained by the adaptive compressive sampling method. Finally, the model parameters were estimated and the video signal was reconstructed from the sampled data. Experimental results show that, without degrading reconstruction quality, the proposed algorithm outperforms the CS-LDS algorithm: it reduces the sampled data by 20%-40% in the uniform measurement process and saves an average of 0.1-0.3 s of running time per frame. The improved algorithm thus reduces both the number of samples and the running time.
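The adaptive sampling step can be sketched as follows, with a hypothetical allocation rule that gives near-static frames fewer random measurements (the threshold and measurement counts are assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_measurements(frames, m_min=8, m_max=64, thresh=0.05):
    """Allocate per-frame random measurements adaptively: a frame that
    differs little from its predecessor gets fewer measurements, saving
    samples without (ideally) hurting reconstruction quality."""
    n = frames.shape[1]
    counts, samples = [], []
    prev = None
    for f in frames:
        if prev is None or np.linalg.norm(f - prev) / np.linalg.norm(prev) > thresh:
            m = m_max          # first frame or scene change: sample densely
        else:
            m = m_min          # near-static frame: sample sparsely
        phi = rng.standard_normal((m, n))  # random Gaussian measurement matrix
        samples.append(phi @ f)
        counts.append(m)
        prev = f
    return counts, samples

# 4 frames of length 256: a frame, a near copy, a big change, a near copy
base = rng.standard_normal(256)
frames = np.stack([base, base + 1e-4, -base, -base + 1e-4])
counts, _ = adaptive_measurements(frames)
print(counts)  # dense, sparse, dense, sparse
```

The LDS parameter estimation and reconstruction would then run on these variable-length measurement vectors; only the allocation idea is shown here.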
The traditional testing process does not specifically consider system performance. With the wide application of parallel testing methods, more attention is being paid to system performance and data throughput, and optimizing software design with multithreading technology has become an effective way to improve the performance of automatic test systems. By modeling the testing pipeline process, using asynchronous pipeline design patterns, and combining task-oriented concepts, a practical test system programming model was proposed. Experimental results show that the model can significantly shorten the average test time in the ideal case of randomly arriving test tasks. Applying the model to an instance of measuring the characteristic parameters of an Alternating Current (AC) contactor, the results further indicate that it can significantly increase the flexibility of test configuration while avoiding the complexity of multi-threaded programming.
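The asynchronous pipeline pattern can be sketched with one thread and one queue per stage, so a new test task enters the pipeline while earlier tasks are still in later stages. The three stage names below are assumptions for illustration:

```python
import queue
import threading

def stage(fn, q_in, q_out):
    """One pipeline stage: pull a task, process it, pass it downstream.
    A None item is a poison pill that shuts the stage down in order."""
    while True:
        item = q_in.get()
        if item is None:
            q_out.put(None)
            return
        q_out.put(fn(item))

# Hypothetical three-stage test pipeline: setup -> measure -> report
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
threads = [
    threading.Thread(target=stage, args=(lambda t: t + ":setup", q0, q1)),
    threading.Thread(target=stage, args=(lambda t: t + ":measure", q1, q2)),
    threading.Thread(target=stage, args=(lambda t: t + ":report", q2, q3)),
]
for th in threads:
    th.start()
for task in ["dut1", "dut2", "dut3"]:
    q0.put(task)       # tasks stream in; stages overlap their work
q0.put(None)

results = []
while (r := q3.get()) is not None:
    results.append(r)
for th in threads:
    th.join()
print(results)
```

The task-oriented point of the abstract is visible here: test code only defines per-stage functions and enqueues tasks; the threading and hand-off are hidden inside the pipeline model.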
To overcome the shortcoming that a random measurement matrix is hard to implement in hardware because its elements are randomly generated, a new structured and sparse deterministic measurement matrix was proposed by studying the theory of measurement matrices in Compressed Sensing (CS). The new matrix is based on the parity check matrix of a Quasi-Cyclic Low Density Parity Check (QC-LDPC) code over a finite field; given the good channel decoding performance of QC-LDPC codes, a CS measurement matrix based on them was expected to perform well. To verify the performance of the new matrix, CS reconstruction experiments on one-dimensional and two-dimensional signals were conducted. The experimental results show that, compared with commonly used matrices, the proposed matrix has lower reconstruction error under the same reconstruction algorithm and compression ratio, achieving an improvement of about 0.5-1 dB in Peak Signal-to-Noise Ratio (PSNR). In particular, if the new matrix is applied in hardware, the required physical storage space and implementation complexity should be greatly reduced thanks to its quasi-cyclic and symmetric structure.
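A minimal sketch of building a quasi-cyclic binary matrix from circulant permutation blocks, the structure that makes QC-LDPC-based measurement matrices hardware-friendly (the shift grid below is illustrative, not a code from the paper):

```python
import numpy as np

def circulant(b, shift):
    """b x b circulant permutation matrix: the identity cyclically shifted."""
    return np.roll(np.eye(b, dtype=np.uint8), shift, axis=1)

def qc_matrix(shifts, b):
    """Assemble a quasi-cyclic binary matrix from a grid of shift values,
    as in QC-LDPC parity-check matrices: each entry expands into a b x b
    circulant permutation block. (In LDPC conventions a shift of -1 would
    denote an all-zero block; only nonnegative shifts are used here.)"""
    return np.block([[circulant(b, s) for s in row] for row in shifts])

shifts = [[0, 1, 2, 3],
          [0, 2, 4, 6]]
phi = qc_matrix(shifts, b=5)
print(phi.shape)  # (10, 20)
```

The hardware benefit is visible in the construction: storing the small integer shift grid fully determines the matrix, and each block can be realised as a cyclic shift register instead of stored element by element.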